Clearview AI on track to win U.S. patent for facial recognition technology

#artificialintelligence

Clearview AI has gotten the green light on a federal patent for its facial recognition technology -- an award that the company says is the first to cover a so-called "search engine for faces" that crawls the internet to find matches. Clearview's software -- which scrapes public images from social media to help law enforcement match images in government databases or surveillance footage -- has long drawn fire from privacy advocates who say it uses people's faces without their knowledge or consent. Civil rights groups also argue that facial recognition technology is generally error-prone, misidentifying women and minorities at higher rates than it does white men and sometimes leading to false arrests.


IBM working on AI tools to fight bias in online ads - TechCentral.ie

#artificialintelligence

Seeking to address discrimination concerns, IBM is working on new artificial intelligence (AI) tools that would make sure online advertising algorithms don't unfairly exclude women and minorities. Researchers and civil rights groups have found some online audiences, including black people and women, can be excluded from seeing employment listings, housing ads, and other ads due to automated advertising strategies. Following complaints by federal regulators and activists, Google and Facebook, the world's largest sellers of online ads, have enacted changes to reverse this trend. Now, IBM is launching a new AI project to combat this problem, according to Reuters. IBM says a team of 14 employees will research "fairness" in online ads over the rest of 2021.


Executive Interview: Beena Ammanath Boosts Women and Diversity in Tech as AI Expands - AI Trends

#artificialintelligence

Beena Ammanath is the Founder and CEO of Humans For AI, a nonprofit organization focused on increasing diversity in the tech workforce building AI. She is a recognized leader and industry expert who has driven pioneering technology changes in the use of AI, data, and analytics at several market-leading companies. She has worked as a mentor to help women and minorities enter the new economy. She started Humans for AI in 2017 to help make AI understandable to the non-tech community. Beena is also an industrial board member at Cal Poly University, where she brings an industry perspective to help shape the engineering curriculum.


Artificial Intelligence has a gender problem -- why it matters for everyone

#artificialintelligence

More women and minorities must work in tech, or else they risk being left behind in every industry. This grim future was painted by Artificial Intelligence (AI) equality experts who spoke at a conference Thursday hosted by LivePerson, an AI company that connects brands and consumers. In that future, if AI goes unchecked, workplaces will be completely homogenous, hiring only white, nondisabled men. "In this bleak depiction of our future, decades of fights for civil rights and equality have been unwritten in a few lines of code," said EqualAI executive director Miriam Vogel at the conference in Brooklyn, N.Y., called "Boundary Breakers: Women Driving The Future of Tech." Women and minorities are not building AI, and therefore, they are not being represented in popular algorithm-based products, according to Vogel.


Can AI's Racial & Gender Bias Problem Be Solved?

#artificialintelligence

Artificial intelligence (AI) algorithms are complex systems that learn patterns from the training data they are given. But when that training data is flawed, unrepresentative, or biased, the algorithm quickly learns to discriminate too. For women and minorities, these systemic AI issues can quickly become harmful. Bias in AI algorithms doesn't only occur because of problems in training data. When you dig deeper, it becomes readily apparent that bias often comes from how an AI developer frames a scenario or problem.
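The training-data problem described above can be shown with a minimal sketch. The records below are entirely hypothetical, and the "model" is just frequency counting rather than a real learning algorithm, but it illustrates the mechanism: a model fit to skewed historical outcomes reproduces that skew in its predictions.

```python
from collections import Counter

# Hypothetical historical records: (group, outcome). 90% of past "hired"
# examples come from group "A", so group "B" is under-represented among hires.
training_data = (
    [("A", "hired")] * 90 + [("B", "hired")] * 10 +
    [("A", "rejected")] * 10 + [("B", "rejected")] * 90
)

def train(records):
    """Learn P(hired | group) by simple frequency counting."""
    totals, hires = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        if outcome == "hired":
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

model = train(training_data)
print(model)  # {'A': 0.9, 'B': 0.1} -- the model mirrors the historical skew
```

Nothing in the counting logic mentions the groups by name; the disparity comes entirely from the data, which is why audits of training data matter as much as audits of code.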


AI: Understanding bias and opportunities in financial services

#artificialintelligence

It is undeniable that our lives have been made better by artificial intelligence (AI). AI technologies allow us to get almost anything, anytime, anywhere in the world at the click of a button; prevent disease epidemics from spiralling out of control; and generally make day-to-day life a bit easier by helping us save energy, book a babysitter, and manage our cash and our health, all at very low cost. AI's penetration into systems and processes in virtually all sectors of business and life has been rapid and global. The speed and scale at which AI is proliferating does, however, raise the question of how at-risk we may be that the AI we are building for good is also introducing damaging bias at scale. In this two-part series, I explore the issues with AI constructs -- the good, the bad, and the ugly -- and how we can think about shaping a future through AI in financial services that helps lift people up rather than scaling problems up.


AI is Changing our Payment Systems Forever but Does Everyone Benefit? - Services Juridiques Gagné Legal Services

#artificialintelligence

The financial industry has long been riddled with inequalities. These inequalities are closely aligned with social inequalities, which show bias against women and minorities. Access to services, approval for loans, and availability of products that fit their needs have all been problems in the past for these two groups. But now, along comes Artificial Intelligence (AI) to hopefully remove the human bias that's plagued the financial industry. FinTech promises a revolution and in many sectors, it's already begun.


Most TV computer scientists are still white men. Google wants to change that.

USATODAY - Tech Top Stories

Girls who have seen the first season of Hyperlinked, an original series on Google's YouTube Red, are 11% more likely to be interested in computer science careers than viewers who have not watched the show, according to a new study. SAN FRANCISCO -- Google is calling on Hollywood to give equal screen time to women and minorities after a new study the Internet giant funded found that most computer scientists on television shows and in the movies are played by white men. That portrayal does not inspire underrepresented groups to pursue careers in computer science, says Daraiha Greene, Google's CS in Media program manager for multicultural strategy. "We are not trying to erase that image, but we want to diversify and show other people in these roles as well," Greene said. More than three-quarters of characters engaged with computer science are men and more than two-thirds are white, while 17.2% are Asian and 15.5% are from underrepresented racial and ethnic groups, according to the study from the USC Annenberg School for Communication and Journalism.


Google explains how artificial intelligence becomes biased against women and minorities

#artificialintelligence

Time and again, research has shown that the machines we build reflect how we see the world, whether consciously or not. For artificial intelligence that reads text, that might mean associating the word "doctor" with men more than women, or image-recognition algorithms that misclassify black people as gorillas. Google, which was responsible for the gorilla error in 2015, is now trying to educate the masses on how AI can accidentally perpetuate the biases held by its makers. As an example, Google asked users to draw a shoe. Users drew a man's shoe, so the system didn't know that high heels were also shoes.
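The "doctor"-to-men association mentioned above is typically measured as a difference in vector similarity. The sketch below uses made-up toy vectors rather than a real embedding model (real analyses use trained embeddings such as word2vec or GloVe), but the measurement itself, comparing cosine similarities, is the standard approach.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d vectors in which "doctor" happens to sit near "man".
vectors = {
    "doctor": [0.9, 0.8, 0.1],
    "man":    [1.0, 0.7, 0.0],
    "woman":  [0.1, 0.7, 1.0],
}

# A positive score means "doctor" is more similar to "man" than to "woman"
# in this toy space -- the kind of skew audits look for in real embeddings.
bias = (cosine(vectors["doctor"], vectors["man"])
        - cosine(vectors["doctor"], vectors["woman"]))
print(bias > 0)
```

In a real audit the vectors would come from an embedding trained on a large text corpus, and a consistently positive score across many occupation words is what researchers report as learned gender association.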


Science Jobs, Technology Jobs for Women and Minorities: Educational CyberPlayground

AITopics Original Links

Computers and the Internet: Listening to Girls' Voices – Dorothy Ellen Wilcox concludes that "instead of socializing adolescent girls toward docility, non-hierarchical technology like the Internet may provide a discourse for development of higher-level cognitive skills and the ability to unmask inequities in power and politics."